📚 node [[regularization]]
Welcome! Nobody has contributed anything to 'regularization' yet. You can:
- Write something in the document below!
  - There is at least one public document in every node in the Agora. Whatever you write in it will be integrated and made available for the next visitor to read and edit.
- Write to the Agora from social media.
- Sign up as a full Agora user.
  - As a full user you will be able to contribute your personal notes and resources directly to this knowledge commons. Some setup required :)
⥅ related node [[l1_regularization]]
⥅ related node [[l2_regularization]]
⥅ related node [[regularization]]
⥅ related node [[regularization_rate]]
⥅ related node [[ridge_regularization]]
⥅ node [[regularization]] pulled by Agora
📓
garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Regularization.md by @KGBicheno
regularization
Go back to the [[AI Glossary]]
The penalty on a model's complexity. Regularization helps prevent overfitting. Different kinds of regularization include:
- L1 regularization
- L2 regularization
- dropout regularization
- early stopping (not a formal regularization method, but it can effectively limit overfitting)
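As a rough sketch of the difference between the first two kinds, the L1 penalty sums the absolute values of a model's weights, while the L2 penalty sums their squares. The weight values below are made up for illustration:

```python
import numpy as np

# Illustrative weight vector (hypothetical values, not from the note).
weights = np.array([0.5, -1.2, 0.0, 3.0])

# L1 penalty: sum of absolute weights; tends to drive weights to exactly zero.
l1_penalty = np.sum(np.abs(weights))   # ≈ 4.7

# L2 penalty: sum of squared weights; tends to shrink all weights toward zero.
l2_penalty = np.sum(weights ** 2)      # ≈ 10.69
```

Note how L1 treats the large weight (3.0) linearly, while L2 penalizes it quadratically, which is why L2 discourages any single very large weight more strongly.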
⥅ node [[regularization_rate]] pulled by Agora
📓
garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Regularization_Rate.md by @KGBicheno
regularization rate
Go back to the [[AI Glossary]]
A scalar value, represented as lambda (λ), specifying the relative importance of the regularization function. The following simplified loss equation shows the regularization rate's influence:

minimize(loss function + λ(regularization function))
Raising the regularization rate reduces overfitting but may make the model less accurate.
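That trade-off can be sketched numerically. The function and weight values below are illustrative, assuming an L2 regularization term:

```python
import numpy as np

def regularized_loss(data_loss, weights, lam):
    # Simplified form: loss + lambda * (regularization function),
    # here using the L2 penalty (sum of squared weights).
    return data_loss + lam * np.sum(np.asarray(weights) ** 2)

weights = [0.5, -1.2, 3.0]  # hypothetical model weights

# With lambda = 0 the penalty vanishes and only the data loss remains;
# raising lambda makes model complexity increasingly expensive.
loss_no_reg = regularized_loss(1.0, weights, 0.0)   # 1.0
loss_reg = regularized_loss(1.0, weights, 0.1)      # ≈ 2.069
```

A larger λ pushes the optimizer toward smaller weights (less overfitting), but set it too high and the model underfits, hurting accuracy.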
📖 stoas
- public document at doc.anagora.org/regularization
- video call at meet.jit.si/regularization
🔎 full text search for 'regularization'